%%-*- Mode: Text -*-
%% ---Note, specifying ``clbiba'' as an auxiliary option to the
%% ---\documentstyle{article} means to use the Closed Bibliography
%% ---Style in Formatting the Reference List. There are four
%% ---possible bib-styles: (1) opbiba, (2) opbibb, (3) clbiba
%% ---and (4) clbibb. The main difference between the ``Open'' and
%% ---``Closed'' bibliography styles is that ``Open'' styles insert
%% ---a line-break between the major divisions in a bibliographic
%% ---entry, whereas the ``Closed'' styles don't. I think the ``Closed''
%% ---styles might be more familiar to ex-Scribe users.
%% ---In addition to this entry, you should look at the end of this 
%% ---file to see what else has to be done in order for BibTeX to work. 

\documentstyle[11pt,clbiba]{article}

\begin{document}

%% As promised in the preamble, this is the rest of the stuff you
%% need to know about using BibTeX.
%% Here (or at any juncture *After* the \begin{document}) 
%% LaTeX should be told how to format the REFERENCES list: again, several
%% styles are available:

%% (1) plain ---the ``right'' thing for van Leunen fans.
%% (2) unsrt ---unsorted, in order of citation.
%% (3) abbrv ---tries to do a *plain* but abbreviates words.
%% (4) alpha ---alphabetically...similar to our style.
%% (5) full  ---alphabetically...our style.

%% For our purposes, *full* is a likely choice as it most closely
%% mimics how we've cited in the past.

\bibliographystyle{full}

\def\fghc{{\it FGHC\/}}
\def\actorIf{{\it if\/}}
\def\actorThen{{\it then\/}}
\def\actorElse{{\it else\/}}
\def\actorReturn{{\it return\/}}
\def\actorComplain{{\it complain\/}}
\def\actorReady{{\it ready\/}}
\def\DefBehavior{{\it DefBehavior\/}}
\def\DefName{{\it DefName\/}}
\def\actorSend{{\it send\/}}

\pagestyle{myheadings}
\markright{****** DRAFT ******}

\title{Organizational Semantics}

\author{Carl Hewitt\\
MIT Artificial Intelligence Laboratory \\
545 Technology Square \\
Cambridge, Massachusetts 02139\\
\\
(617)253-5873\\
\\
} 

\bigskip
\maketitle

\vspace{.5in}

\begin{abstract}

Organizational semantics is an approach to understanding how to take
organizational action in the face of inconsistent beliefs and
conflicting proposals for action. 

\end{abstract}

\newpage



\section{Introduction}

The approach of Organizational Semantics is based on the thesis that
principles and mechanisms similar to the ones used by large scale
human organizations can be used effectively to empower large scale computer
systems.  This approach addresses issues concerning systems with the
following capabilities:

\begin{itemize}

\item Knowledge in the face of internal inconsistency

\item Organizational action in the face of conflicts

\item Use of self knowledge by a system to improve its own
performance

\end{itemize}

Organizational semantics takes issue with the following slogan
which is currently popular: 

\begin{quote}
                In the Knowledge lies the Power.
\end{quote}

To me the above perspective is far too narrow.  I prefer a more
broadly based approach which addresses wider concerns.

So I propose to turn the above slogan on its side and maintain
instead:

\begin{quote}
                In the {\em Organization} lies the Power.
\end{quote}

{\em Knowledge is usually not the most important factor
in the power of an organization.  Other attributes such as management
skills and effective execution are usually more important.}
Furthermore a weak organization can have a tremendous amount
of knowledge provided to it and more available for the asking.
Organizations can literally choke on their knowledge.

Organizational analysis is important for computer as well as human
systems.  It brings an additional perspective to evaluating the
plausibility of different architectures for intelligent systems.  In
the sections below, I outline some of the implications.


\section{Inconsistency and Conflict}


Traditional approaches do not sufficiently address fundamental problems such as
{\em inconsistency} and {\em conflict}.

Contradictory beliefs and conflicting actions are
engendered by the enormous interconnectivity and interdependence of
organizational knowledge that comes from multiple sources and
viewpoints.  The knowledge of any physical object has extensive {\it
spatiotemporal, causal, terminological, evidential}, and {\it
communicative} connections with other aspects of the organization's
affairs \cite{byte-challenge} \cite{offices-are-open-systems}.  The
interconnectivity generates an enormous network of knowledge which is
inherently inconsistent because of multiple sources making
contributions at different times and places.

The inconsistency of the organizational knowledge bases is deadly to
inference because it undermines the credibility of logical
deduction as a justification for belief.  From contradictory data
bases literally {\it anything} can be deductively inferred!
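The classical derivation behind this observation, sometimes called {\it ex
falso quodlibet\/}, takes only four lines; here \(P\) is any sentence of the
data base that is also contradicted there, and \(Q\) is an arbitrary
sentence:

```latex
\begin{tabbing}
xxx\=xxxxxxxxxxxxxxxxxx\=\kill
1.\>\(P\)\>premise (from the data base)\\
2.\>\(\neg P\)\>premise (from the data base)\\
3.\>\(P \vee Q\)\>from 1 by disjunction introduction\\
4.\>\(Q\)\>from 2 and 3 by disjunctive syllogism
\end{tabbing}
```

Since \(Q\) was arbitrary, a single contradiction licenses every conclusion.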

\section {Microtheories as Tools in Organizations}

A microtheory is a relatively small idealized theory
that embodies a model of some physical system.  A microtheory should
be internally consistent and clearly demarcated.  Any modification of
a microtheory is a new microtheory.  General relativity, Peano
arithmetic, a spread sheet model of a company's projected sales, and a
Spice simulation of an integrated circuit are examples of
microtheories.  Microtheories are simple because they have simple
axiomatizations.  The model axiomatized by a microtheory may be
enormously complicated and even have properties which are formally
undecidable in finite time by arbitrarily large computers.  Computer
systems will require hundreds of thousands of microtheories in order
to effectively participate in organizational work.

In general organizations deal with {\em
inconsistent} microtheories that cannot always be measured against one
another in a pointwise fashion.  Debate and negotiation are used to
compare rival microtheories without assuming that there is a {\it
fixed\/} common standard of reference.  There is no global axiomatic
theory of the world in which we live that gradually becomes more
complete as more microtheories are debugged and introduced.  Instead
each problematical concrete situation is dealt with by using
negotiation and debate among the available overlapping, usually
contradictory, microtheories that are adapted to the situation at hand.
For many purposes in organizations, it is preferable to work with
microtheories which are small and oversimplified, rather than large
and full of caveats and conditions \cite{false-models}.  These small
oversimplified microtheories contradict one another.

\section{Strengths of Logical Deduction}

Logical deduction is a powerful tool for working {\em within} a
microtheory.  The strengths of logical deduction include:

\begin{itemize}

\item {\bf Well Understood:} Logical deduction is a very well
understood and characterized process.  Rigorous model theories exist
for many logics including the predicate calculus, intuitionistic
logics, and modal logics.

\item {\bf Validity Locally Decidable:} The validity of a deductive
proof is supposed to be timeless and acontextual.  (If a deductive
proof is valid at all, then it is supposed to be valid for all times
and places.)  The timeless and acontextual character of logical
deduction is a tremendous advantage in separating the proof creation
situation from the proof checking context.  In addition logical
deductive proof is supposed to be checkable solely from the text of
the proof.  In this way proofs can be checked by multiple actors at
different times and places perhaps using different methods adding to
the confidence in the deductions.  In order to be checkable solely
from its text, the proof checking process cannot require making any
observations or consulting any external sources of information.
Consequently all of the premises of each proof step as to place, time,
objects, etc. must be explicit.  In effect a {\em situational closure}
must be taken for each deductive step and for the whole deductive
proof tree.  Proof checking proceeds in a closed world in which the
axioms and rules of deductive inference have been laid out explicitly
beforehand.  Nonmonotonic Logics provide schemas for closure operators
on sets of axioms, which result in stronger, more complete axiom sets.
For the purposes of this paper, it suffices to treat the deductive
proofs of Nonmonotonic Logics as formal objects that are not
fundamentally different from the formal proofs of any other deductive
system.

\end{itemize}

The advantages of logical deductive reasoning within a microtheory are
enormously important.  However, we need to look at the other side of the
coin to examine what is necessarily left out of the logical deductive
framework.
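The local decidability of proof checking can be made concrete with a toy
checker.  The following Python sketch is my own illustration (the function
name {\tt check\_proof} and the data representation are invented for this
purpose): it validates a proof using only the text of the proof and an
explicitly given axiom set, consulting no external sources of information.

```python
# A minimal sketch of locally decidable proof checking.  Formulas are
# strings or ("->", antecedent, consequent) tuples; the checker consults
# nothing but its two arguments, i.e. the proof text and the axiom set.

def check_proof(axioms, steps):
    """Each step is ("axiom", formula) or ("mp", i, j): modus ponens from
    step i (the antecedent) and step j (an implication).  Returns the list
    of established formulas, or raises ValueError."""
    derived = []
    for n, step in enumerate(steps):
        if step[0] == "axiom":
            if step[1] not in axioms:
                raise ValueError(f"step {n}: {step[1]!r} is not an axiom")
            derived.append(step[1])
        elif step[0] == "mp":
            a, imp = derived[step[1]], derived[step[2]]
            if imp[0] != "->" or imp[1] != a:
                raise ValueError(f"step {n}: modus ponens does not apply")
            derived.append(imp[2])
        else:
            raise ValueError(f"step {n}: unknown rule {step[0]!r}")
    return derived

axioms = {"p", ("->", "p", "q")}
proof = [("axiom", "p"), ("axiom", ("->", "p", "q")), ("mp", 0, 1)]
```

Because the checker reads nothing but the proof text and the axioms laid
out beforehand, the same proof can be checked by different actors at
different times and places with the same result.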

\section{Limitations of Logical Deduction}

The advantages of logical deductive inference come at a tremendous cost:
{\em The validity of logical deductive proofs is independent of the
social spatio-temporal context in which they are created.}

Let {\tt S} be the statement {\tt safe(diablo-canyon, july-1-1987)}. 
Consider the proof of {\tt S}.

\subsection{The Validity of Logical Deductive Proofs is Acontextual.}

Logical deduction requires that the validity of the
proof of {\tt S} be independent of whether the deduction takes
place after July 1, 1987, and thus concerns the past, or takes place
before July 1, 1987, and concerns the future.  Logical reasoning can be
used before the situation described by {\tt S} to {\it predict\/} the
safety of the plant.  Or it can be used after the situation described
by {\tt S} to {\it analyze\/} the previous safety.

In either case the validity of the logical deductive proof of {\tt S} is
crucially {\em independent} of whether July 1, 1987 lies in the past or
the future, and therefore the current time cannot be taken into account
in the proof.  In other words the reality of {\em now} cannot be
introduced into logical deductive proofs.

I claim that {\em logical deductive proofs do not provide an adequate
basis for decision making since their validity is independent of the
decision making situation}.  The basis for effective decision making is
usually crucially situation dependent.

Nevertheless, keeping in mind their acontextual limitations, it is
extremely valuable to use logical deduction as an important tool in the
analysis of the internal structure of microtheories for the operation of
the nuclear plant, its possible failure mechanisms, and their
consequences.

\subsection{Microtheories Contradict Each Other.}

The multitude of microtheories concerning the safety of the Diablo
Canyon Nuclear Plant contradict one another.  There are microtheories
that detail convincing disaster scenarios in case of earthquake as
well as microtheories that detail extensive redundancy mechanisms that
will cope with any likely consequences of an earthquake.  Therefore
the logical deductive inferences from any microtheory are not very
convincing by themselves because they are contradicted by the
inferences of competing microtheories.  For example {\tt
safe(diablo-canyon, july-1-1987)} is deducible in a microtheory named
{\tt safe-emergency-shut-down} whereas {\tt NOT(safe(diablo-canyon,
july-1-1987))} is deducible in another named {\tt melt-down-scenario}.
Logical deduction does not provide any way to resolve the
contradictions.
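The situation can be sketched schematically in Python (an illustration of
mine, not a real safety analysis; every fact, rule, and theory name here is
invented).  Each microtheory is internally consistent under naive forward
chaining, yet the two reach contradictory conclusions about the same
situation:

```python
# Two hypothetical microtheories, each a set of facts plus rules mapping
# a frozenset of premises to a conclusion.  Names are illustrative only.

def forward_chain(facts, rules):
    """Naive forward chaining to a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

safe_emergency_shut_down = forward_chain(
    {"redundant-cooling", "tested-shutdown"},
    [(frozenset({"redundant-cooling", "tested-shutdown"}),
      "safe(diablo-canyon)")])

melt_down_scenario = forward_chain(
    {"fault-line-nearby", "cooling-fails-in-quake"},
    [(frozenset({"fault-line-nearby", "cooling-fails-in-quake"}),
      "NOT(safe(diablo-canyon))")])
```

Nothing inside either microtheory signals the conflict; it only appears
when the two sets of conclusions are compared from outside.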

These inherent contradictions are among the most prominent features
of the intellectual landscape.  The burden of proof is on those who
believe that knowledge concerning the safety of the Diablo Canyon
nuclear plant on July 1, 1987, or the knowledge concerning any other
situated physical object for that matter, can somehow be consistently
axiomatized.

\subsection{Microtheories Support Conflicting Actions}

Over the years there have been some interesting attempts to use
logical deductive inference to control action.  John McCarthy
introduced a predicate named {\tt SHOULD} so that sentences such as
{\tt SHOULD(walk(i,car))} could be expressed.  His idea was that such
deductive inferences concerning such predicates could control what
would be done.

However, {\em microtheories for preferences are inherently in conflict
with each other because of the tradeoffs inherent in real situations}.
Consider the conflicts inherent in setting the price for some product
(called X).  In general greater profitability is preferable to lower
profitability and greater market share is preferable to lower market
share.  Increasing the price tends to increase profitability but
decrease market share, thus creating an inherent conflict.  In
practice this means that {\tt SHOULD(raise-price(x))} is deducible in
the {\tt increase-profitability} microtheory whereas {\tt
SHOULD(NOT(raise-price(x)))} is deducible in the {\tt
increase-market-share} microtheory.  Note that the statements {\tt
SHOULD(raise-price(x))} and {\tt SHOULD(NOT(raise-price(x)))} do not
formally contradict each other. However, the deadlock of conflicting
recommendations is just as troublesome from an action taking point of
view.
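The deadlock can be sketched in Python (my own illustration; the tuple
representation of recommendations is invented).  It makes the distinction
explicit: the two {\tt SHOULD} statements are not formal negations of each
other, yet the recommended actions cannot both be taken:

```python
# Recommendations are ("SHOULD", action) tuples; an action may be wrapped
# in ("NOT", action).  All names here are illustrative.

def should(action):
    return ("SHOULD", action)

def negation(action):
    return ("NOT", action)

# The increase-profitability microtheory recommends raising the price;
# the increase-market-share microtheory recommends not raising it.
rec_profit = should(("raise-price", "x"))
rec_share = should(negation(("raise-price", "x")))

def conflicting(rec1, rec2):
    """True when the two recommendations cannot both be followed: the
    action of one is the negation of the action of the other."""
    (_, a1), (_, a2) = rec1, rec2
    return a1 == ("NOT", a2) or a2 == ("NOT", a1)
```

No deduction within either microtheory selects between the two
recommendations; the deadlock has to be resolved extradeductively.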

\subsection{Deductive Justification for Decision Making in Concurrent
Systems Requires Arrival Order Assumptions.}

In this section we show that an important class of decisions
made in concurrent systems are not deductively derivable.  Therefore
logical deductive proof does not provide an adequate foundation
for decision making in concurrent systems.

Originally the work at ICOT started from an attempt to use Prolog as a
foundation, but it has converged to a model very similar to the Actor
model of computation \cite{laws} \cite{agha-phd}.  Researchers at ICOT
have developed a language called {\it Flat Guarded Horn Clauses}
(\fghc{}) \cite{ueda-phd} which was influenced by the CP language of Ehud
Shapiro, who spent considerable time as a visitor at ICOT.  CP was an
explicit attempt to incorporate the Open Systems capabilities of Actor
languages into a concurrent Horn clause substrate by making {\em read
only annotations} to variables \cite{subset-of-concurrent-prolog}.  The
researchers at ICOT discovered how to dispense with the read only
annotations introduced by Shapiro thereby making FGHC simpler than CP.
{\em Although the syntax of \fghc{} is an extension to that of PROLOG,
the semantics of \fghc{} is in the message-passing tradition of
actors.} Unlike PROLOG there is no backtracking or other search
mechanisms built into \fghc{}.  ICOT plans to use \fghc{} as the
system implementation interface language for their large scale
concurrent computers.

A \fghc{} program is a set of guarded Horn clauses where each clause has
the following form:

\[ H :-\; G_1, G_2, \ldots, G_m \mid B_1, B_2, \ldots, B_k \]

\noindent where \(H\) is the head pattern which must match the incoming
message, all of the {\it guards\/} \(G_1, G_2, \ldots, G_m\) are
predefined primitive predicates which must be true in order for the
clause to commit, $\mid$ is the commitment operator, and \(B_1, B_2,
\ldots, B_k\) are messages to be disseminated after the clause commits.

In actor terminology, the unification of the head of the clause and the
evaluation of the guards serve a role similar to that of accepting a
communication.  In particular, once the head is unified with a message
and the guards are satisfied, the clause {\it commits}, and any other
clauses which have the same predicate as the head are disabled.
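A schematic Python model of commitment may help (my own sketch, not ICOT's
implementation; the data representation and the example clauses are
invented): for an incoming message, the first clause whose head matches and
whose guards all hold commits, and the remaining clauses for that predicate
are disabled.

```python
# Messages are tuples whose first element names the head predicate, e.g.
# ("withdraw", amount, balance).  Clauses pair a head predicate with guard
# predicates over the message and a body producing the messages to send.

def commit(clauses, message):
    """Return the body messages of the first clause whose head matches the
    message and whose guards all hold, or None if no clause applies."""
    for head, guards, body in clauses:
        if head == message[0] and all(g(message) for g in guards):
            return body(message)   # commit: remaining clauses are disabled
    return None                    # no clause applies; the message suspends

clauses = [
    ("withdraw", [lambda m: m[2] >= m[1]],      # guard: balance >= amount
     lambda m: [("withdrawalReceipt", m[1])]),
    ("withdraw", [lambda m: m[2] < m[1]],       # guard: balance < amount
     lambda m: [("overdraftNotice", m[1])]),
]
```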

\subsection{Forward Chaining Interpretation of \fghc{}}

The developers of \fghc{} have traditionally assigned a {\it backward
chaining interpretation} to \fghc{} in which the head of a clause
\(H\) is interpreted as a goal to be satisfied, the guards
\(G_1, G_2, \ldots, G_m\) as preconditions that must be satisfied
before subgoals can be processed, and \(B_1, B_2, \ldots, B_k\) as
subgoals that must be proved to establish \(H\).

However, in many ways it is more natural to interpret \fghc{} in terms
of constraint driven forward chaining in which the head of a clause
\(H\) is interpreted as a constraint to be satisfied, the guards
\(G_1, G_2, \ldots, G_m\) as preconditions that must be satisfied
before other constraints can be processed, and \(B_1, B_2, \ldots, B_k\) as
consequences of the constraint \(H\).

From a logical deductive point of view, each \fghc{} commitment is a
new arrival order {\it assumption}.  Thus the traces of
computations carried out by \fghc{} do not represent logical deductive
proofs.  {\em Logical deductive proofs do not allow additional
assumptions to be introduced whenever they happen to be needed!}

The traces of a computation of a \fghc{} system represent a line of
deductive argument that is justified by a potentially exponentially
growing set of assumptions.  There is no logical deductive
justification for the assumptions!  Note that the same assumptions are
required by both the backward chaining and the constraint driven
forward chaining interpretations of \fghc{}.

How the assumptions arise in practice will be illustrated using the
following example \fghc{} program which implements a simple bank
account which responds to deposit, withdraw, and balance messages:

\newpage
\begin{tt}
\begin{tabbing}
account([deposit(Amount, Response) | MoreMessages],\\
   \=    \=Balance,Owner) :-\\
   \>{\it /*Deposit Amount into the account*/}\\
   \>   \>plus(Balance, Amount, NewBalance),\\
   \>   \>{\it /*Bind NewBalance to the sum of Balance and Amount*/}\\
   \>   \>Response = depositReceipt(Amount, Owner, Balance),\\
   \>   \>{\it /*Bind Response to a receipt for the amount*/}\\
   \>   \>account(MoreMessages, NewBalance, Owner).\\
   \>   \>{\it /*Assert that the account has balance NewBalance\\
   \>   \>   for MoreMessages*/}\\
\\
account([balance(Balance) | MoreMessages],\\
   \>    \>Balance, Owner) :-\\
   \>{\it /*Query the balance of the account*/}\\
   \>    \>account(MoreMessages, Balance, Owner).\\
   \>    \>{\it /*Assert that the account has the same balance and owner\\
   \>    \>   for MoreMessages*/}\\
\\
account([withdraw(Amount, Response) | MoreMessages],\\
   \>    \> Balance,Owner) :-\\
   \>{\it /*Withdraw Amount from the account*/}\\
   \>    \>>=\=(Balance, Amount) |\\
   \>    \>{\it /*If the balance is at least the withdrawal Amount*/}\\
   \>    \>  \>plus(NewBalance, Amount, Balance),\\
   \>    \>  \>{\it /*Bind NewBalance to Balance minus Amount*/}\\
   \>    \>  \>Response = withdrawalReceipt(Amount, Owner, NewBalance),\\
   \>    \>  \>{\it /*Bind Response to a withdrawal receipt for Amount*/}\\
   \>    \>  \>account(MoreMessages, NewBalance, Owner).\\
   \>    \>  \>{\it /*Assert that the account has balance NewBalance\\
   \>    \>  \>   for MoreMessages*/}\\
\\
account([withdraw(Amount, Response) | MoreMessages],\\
   \>    \>Balance, Owner) :-\\
   \>{\it /*Withdraw Amount from the account*/}\\
   \>    \><(Balance, Amount) |\\
   \>    \>{\it /*If the balance is less than the withdrawal Amount*/}\\
   \>    \>   \> Response = overdraftNotice(Amount, Owner, Balance),\\
   \>    \>   \>{\it /*Bind Response to an overdraft notice*/}\\
   \>    \>   \>account(MoreMessages, Balance, Owner).\\
   \>    \>   \>{\it /*Assert that the account has the same balance and owner\\
   \>    \>   \>     for MoreMessages*/}
\end{tabbing}
\end{tt}

\newpage

Consider the following transactions in \fghc{}:     

\begin{verbatim}
account(account1Messages, 100, Clark).
\end{verbatim}

The above statement declares a new variable named account1Messages and
asserts that the balance is 100 and the owner is Clark for
account1Messages.

Suppose that two users named Ueda and Shapiro need to share common
access to Clark's account.  The following statement declares the
variables UedaAccount1Messages and ShapiroAccount1Messages such that
when these two message streams are merged, the result is
account1Messages:

\begin{verbatim}
merge(UedaAccount1Messages, ShapiroAccount1Messages, account1Messages).
\end{verbatim}

where merge is defined as follows in \fghc{}:

\begin{verbatim}
merge([Message | MoreXs], Ys, [Message | MoreMergedXsAndYs]) :-
        merge(MoreXs, Ys, MoreMergedXsAndYs).

merge(Xs, [Message | MoreYs], [Message | MoreMergedXsAndYs]) :-
        merge(Xs, MoreYs, MoreMergedXsAndYs). 

merge(Xs, [], Xs).

merge([], Ys, Ys).
\end{verbatim}
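The essential point about merge is that its clauses can commit in either
order when both input streams have messages available.  The following
Python sketch (mine, not ICOT code) enumerates every interleaving the
clauses permit; this set is exactly the space of arrival order assumptions:

```python
# Enumerate all merges of two message streams that preserve the order
# within each stream, mirroring the nondeterministic choice points at
# which the FGHC merge clauses may commit.

def interleavings(xs, ys):
    """All order-preserving merges of the lists xs and ys."""
    if not xs:
        return [list(ys)]
    if not ys:
        return [list(xs)]
    return ([[xs[0]] + rest for rest in interleavings(xs[1:], ys)] +
            [[ys[0]] + rest for rest in interleavings(xs, ys[1:])])

ueda = ["withdraw(70)"]
shapiro = ["withdraw(80)"]
```

With one message per stream there are exactly two interleavings, which
correspond to the two assumptions examined below.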

The following statements use the variables UedaMessages and
ShapiroMessages to provide Ueda and Shapiro respectively with shared
access to account1:

\begin{verbatim}
send(UedaMessages, UedaAccount1Messages, MoreUedaMessages).

send(ShapiroMessages, ShapiroAccount1Messages, MoreShapiroMessages). 
\end{verbatim}

where send can be defined as follows in \fghc{}:

\begin{verbatim}
send(Xs, Message, MoreXs) :-
         Xs = [Message | MoreXs].
\end{verbatim}

Now Ueda and Shapiro can concurrently communicate with account1 using
their respective message streams.  Suppose that Ueda attempts to
withdraw 70 from account1 by asserting the following statement:

\begin{verbatim}
send(UedaAccount1Messages, withdraw(70, UedaResponse), [])
\end{verbatim}

while concurrently Shapiro attempts to withdraw 80 from account1 by
asserting the following statement:

\begin{verbatim}
send(ShapiroAccount1Messages, withdraw(80, ShapiroResponse), [])
\end{verbatim}

By using the \fghc{} definition of send the following statements can
be logically deduced: 

\begin{verbatim}
UedaAccount1Messages = [withdraw(70, UedaResponse)].

ShapiroAccount1Messages = [withdraw(80, ShapiroResponse)].
\end{verbatim}

In order to complete the logical deduction, one of the
following assumptions must be made:

\begin{itemize}

\item Assumption 1:
\begin{verbatim}
account1Messages =
         [withdraw(70, UedaResponse),
          withdraw(80, ShapiroResponse)] 
\end{verbatim}

\item Assumption 2: \begin{verbatim}
account1Messages =
         [withdraw(80, ShapiroResponse),
          withdraw(70, UedaResponse)] 
\end{verbatim}

\end{itemize}

Note that it is of some significance which assumption is made!
If Assumption 1 is made then the following statements deductively
follow:

\begin{verbatim}
UedaResponse = withdrawalReceipt(70, Clark, 30)
ShapiroResponse = overdraftNotice(80, Clark, 30)
\end{verbatim}

\noindent
which state that Ueda gets his money and Shapiro is refused.
Whereas if Assumption 2 is made then the following statements
deductively follow:

\begin{verbatim}
ShapiroResponse = withdrawalReceipt(80, Clark, 20)
UedaResponse = overdraftNotice(70, Clark, 20)
\end{verbatim}

\noindent
which state that Ueda is refused and Shapiro gets his money.

There is no deductive reason to make Assumption 1 as opposed
to Assumption 2.  {\it Both are equally arbitrary new assumptions in
terms of logical deductive inference.}  They are examples of {\it
arrival order assumptions.}
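The two deductions can be replayed in Python (a sketch of mine, not the
\fghc{} program above; here the receipt records the balance after the
withdrawal, matching the statements just derived).  The same initial state
and the same pair of messages yield different outcomes under the two
arrival order assumptions:

```python
# Replay the shared-account example under each arrival order assumption.
# Only withdraw messages are modeled; a receipt carries the new balance,
# an overdraft notice the unchanged balance.

def run_account(balance, owner, messages):
    """Process withdraw messages in arrival order; return the responses."""
    responses = []
    for kind, amount in messages:
        if kind == "withdraw":
            if balance >= amount:
                balance -= amount
                responses.append(("withdrawalReceipt", amount, owner, balance))
            else:
                responses.append(("overdraftNotice", amount, owner, balance))
    return responses

assumption_1 = [("withdraw", 70), ("withdraw", 80)]  # Ueda's message first
assumption_2 = [("withdraw", 80), ("withdraw", 70)]  # Shapiro's message first
```

Starting from balance 100, Assumption 1 gives Ueda a receipt and Shapiro an
overdraft notice; Assumption 2 gives the reverse, as in the text.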

The example presented in this section is in fact archetypal of the
general situation in concurrent systems.  Such systems have the property
that the behavior of the system is often critically affected by the order
of arrival of communications.  The example illustrates the fact that,
in general, the arrival order decisions of a concurrent system are not
deductively derivable and therefore logical deduction does not provide
an adequate decision making basis for computation in concurrent systems. 

The above point is elaborated more fully in the section of this paper
titled ``Greater Knowledge can Mean Greater Deductive Indeterminacy''. 

\section{Action Languages have Mathematical Semantics}

The purpose of this section of the paper is to analyze the semantic
relationship between concurrent action languages and concurrent
languages such as (F)GHC, (F)CP, and (F)Parlog which are based on
extensions of Horn clause syntax.  Consider the implementation of the
shared account discussed above in an Actor language:

\newpage
\begin{tt}
\begin{tabbing}
(\DefBehavior{} account (balance owner)\\
  \=\\
  \>(=> \=(:deposit amount)\\
  \>    \>{\it ;Deposit amount in the account}\\
  \>    \>(\actorReady{} (balance (+ balance amount)))\\
  \>    \>{\it ;Account is ready for the next message with balance incremented by amount}\\
  \>    \>(\actorReturn{} (depositReceipt amount owner balance)))\\
\\
  \>(=> (:balance)\\
  \>    \>{\it ;Query the balance of the account}\\
  \>    \>(\actorReady{})\\
  \>    \>{\it ;Account is ready for the next message}\\
  \>    \>(\actorReturn{} balance))\\
\\
  \>(=> (:withdraw amount)\\
  \>    \>{\it ;Withdraw amount from the account}\\
  \>    \>(\actorIf{} \=(>= balance amount)\\
  \>    \>       \>(\actorThen{} \=\\
  \>    \>       \>         \>(\actorReady{} (balance (- balance amount)))\\
  \>    \>       \>         \>{\it ;Account is ready for the next message}\\
  \>    \>       \>         \>{\it ;with balance decremented by amount}\\
  \>    \>       \>         \>(\actorReturn{} (withdrawalReceipt
amount owner balance)))\\   
  \>    \>       \>(\actorElse{}\\
  \>    \>       \>         \>(\actorReady{})\\
  \>    \>       \>         \>{\it ;Account is ready for the next message}\\
  \>    \>       \>         \>(\actorComplain{} (overdraftNotice amount owner balance))))))) 

\end{tabbing}
\end{tt}

The interactions corresponding to the ones discussed above for \fghc{}
are as follows: 

\begin{tt}
\begin{tabbing}
(\DefName{} account1 (create account 100 Clark))
\end{tabbing}
\end{tt}

The above command declares a new identifier named account1 and
binds it to a new account with balance 100 and owner Clark.

Again suppose that two users named Ueda and Shapiro need to share
access to Clark's account.  The following command gives
them the ability to communicate with account1.

\begin{tt}
\begin{tabbing}
(\actorSend{} Ueda account1)

(\actorSend{} Shapiro account1)
\end{tabbing}
\end{tt}

Now Ueda and Shapiro can concurrently communicate with account1.
Again suppose that Ueda attempts to
withdraw 70 from account1 using the following command:

\begin{tt}
\begin{tabbing}
(withdraw account1 70)
\end{tabbing}
\end{tt}

\noindent
while concurrently Shapiro attempts to withdraw 80 from account1
using the following command:

\begin{tt}
\begin{tabbing}
(withdraw account1 80)
\end{tabbing}
\end{tt}

As before the operation of account1 will be properly serialized so
that one of the two will get a withdrawal receipt and the other
will get an overdraft notice.

The closeness of the actor language and \fghc{} raises the following
interesting question:  {\em Is an actor action language just as much a
``logic programming'' language as \fghc{}?} The answer to this question
is very subtle.

Gul Agha, building on work by Will Clinger, has developed a denotational
message passing semantics for actor action languages based on system
configurations \cite{agha-phd} \cite{clinger-phd}.  Meaning is fully
compositional in that the meaning of a system is the parallel composition
of the meanings of its subsystems.  More formally if \(S_{1}\) and
\(S_{2}\) are systems then:

\[
{\cal M}(S_{1} \parallel S_{2}) =
{\cal M}(S_{1}) \parallel {\cal M}(S_{2}) \] 

Actor Theory provides a {\em meaning} for the scripts of Actor action
programming languages such as the one presented above for an
implementation of shared accounts.  The meaning is obtained
recursively by analyzing the script as a system of communicating
actors.  It appears that Actor Theory may be equally well suited for
analyzing systems expressed in \fghc{}.

\subsection{Semantics and Behavior}

Message Passing Semantics takes a different perspective on the meaning
of a sentence from that of truth-theoretic semantics.  In
truth-theoretic semantics \cite{tarski-interpretations}, the meaning
of a sentence is determined by the models which make it true.  For
example the conjunction of two sentences is true in a model exactly
when both of its conjuncts are true in the model.  {\em In contrast
Message Passing Semantics takes the meaning of a message to be the
effect it has on the subsequent behavior of the system.} In other
words the meaning of a message is determined by how it affects the
recipients.  Each partial meaning of a message is constructed by a
recipient in terms of how it is processed (c.f.
\cite{conduit-metaphor}).  The meaning of a message is open ended and
unfolds indefinitely far into the future as other recipients process
the message.

At a deep level, understanding always involves categorization, which
is a function of interactional (rather than inherent) properties and
the perspective of individual viewpoints.  Message Passing Semantics
differs radically from truth-theoretic semantics which assumes that it
is possible to give an account of truth in itself, free of
interactional issues, and that the theory of meaning will be based on
such a theory of truth \cite{Metaphors-We-Live-By}.


\subsection{Greater Knowledge can Mean Greater Deductive Indeterminacy.} 

Some researchers \cite{formal-reasoning} [Genesereth 1986] have
attempted to use metatheories as a control mechanism.  For example
Mike Genesereth has introduced a function named OUGHT so that if a
machine is in a state described by microtheory T then OUGHT(T) is the
microtheory which describes the ``next'' state of the machine.  The
axiomatization of the OUGHT function takes place in a metatheory which
describes the base level microtheories of the states of the machine.

However, the attempt to use logical deductive inference in
metatheories as a control mechanism has an underlying defective
assumption:

\begin{quote}

Increased knowledge about the state and inputs of a machine means
greater deductive knowledge of the subsequent state of the machine.

\end{quote}

The above assumption is contrary to an analogue of the Heisenberg
Indeterminacy Principle of modern physics, which states:

\begin{quote}

For a machine M which is sensitive to order of arrival of inputs,
{\em greater precision in the knowledge of the closeness of arrival of
two inputs to M results in greater indeterminacy of knowledge of the
subsequent state of M}.  For example the indeterminacy of the state of
M is greater if it is known that the two inputs arrived within two
femtoseconds of each other than if it is known that they arrived within
two microseconds of each other.

\end{quote}

The Indeterminacy Principle is of increasing practical importance as
the asynchrony of large scale concurrent computers increases.
Greater asynchrony causes greater deductive uncertainty about the
subsequent machine behavior even granted complete knowledge of the
structure and initial state of the machine and exact knowledge of all
its input.

It is often claimed that logical deduction has full computational
power because logical formulas can be devised to simulate a Turing
Machine.  However, concurrent computers derive indeterminacy from the
indeterminacy of arrival order of internal communications within the
machine, not by invoking some random element such as by flipping a
coin.  The Quantum Indeterminacy Principle fundamentally constrains
the logical deductive inferences that can be drawn concerning the
subsequent behavior of a concurrent computer.  Therefore it is by no
means sufficient simply to be able to simulate Turing Machines.  {\em
Greater computational power than that of a Turing Machine is required
to implement concurrent systems.}\footnote{Agha provides an excellent
exposition of the nature of a mathematical model of concurrent
computation and its differences with classical nondeterministic
automata theories\cite{agha-phd}.}

\section{Due Process}

The logical deduction approach is further limited because the
tree-structured nonsituated locally decidable character of logical
deductive proofs means that audiences cannot be taken into account.
Extradeductive techniques such as negotiation and debate are needed to
deal with the inconsistencies and conflicting actions.

Due Process is an alternative to logical deductive inference as a
foundation for organizational judgment and decision making.  Due
Process is the inherently reflective organizational activity of humans
and computers for generating sound, relevant, and reliable information
as a basis for decision and action within the constraints of allowable
resources.  It provides an arena in which beliefs and proposals can be
gathered, analyzed, and debated.  The problem of Due Process is to
assure that organizations use appropriate mechanisms for gathering,
recognizing, weighing, evaluating, and negotiating conflicting
alternatives \cite{due-process-in-the-workplace}.  Part of Due Process is to
provide a record of the decision making process which can later be
referenced.

Due Process produces a record of the decision making and action taking
process, including which organization is responsible for dealing with
problems, responses, and questions about the decision made or the
action taken.  This is one way in which responsibility is assessed
for the decisions and actions taken.

The record also includes {\it rationales\/} for various courses of
action such as:

\begin{itemize}

\item {\bf predicted beneficial results:}  Better targeted advertising
will increase sales.

\item {\bf policies guiding conduct:} Products may not be returned for
credit more than 30 days after sale.

\item {\bf reasons tied to specific institutional roles or processes:}
A corporation may not be able to enter the computer business because of a
consent decree that it has signed.

\item {\bf precedent:} The organization might always have taken
Patriot's day as a holiday.  Precedent may seem like a weak rationale.
However, deciding according to precedent in the absence of strong
alternatives yields predictability, stability, and greater coherence
among decided cases.

\end{itemize}
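As a minimal illustration, such a record of rationales might be
represented as a simple data structure.  The sketch below is in Python;
the names {\it DecisionRecord\/} and {\it Rationale\/}, and the kinds of
rationale, are hypothetical and chosen only to mirror the categories
listed above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Rationale:
    # kind is one of the rationale categories above, e.g.
    # "predicted result", "policy", "role", or "precedent".
    kind: str
    text: str

@dataclass
class DecisionRecord:
    decision: str
    responsible_organization: str  # who answers for the decision
    rationales: List[Rationale] = field(default_factory=list)

# A record for the advertising example above.
record = DecisionRecord(
    decision="better targeted advertising",
    responsible_organization="marketing",
    rationales=[Rationale("predicted result",
                          "Better targeted advertising will increase sales")])
```

The record can later be referenced when the decision is challenged, and
the responsible organization can be identified from it.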

Due Process is an inherently self-reflective process in that the
process by which information is gathered, organized, compared, and
presented is itself subject to evaluation, debate, and evolution within
the organization.  Thus the debate is not just about whether or not to
lower prices, but also about the beliefs used in the decision and the
process used by the organization to decide whether or not to lower
them.

Due Process informs organizational action.  Each instance of Due
Process begins with {\em preconceptions} handed down through
traditions and culture that constitute the initial process but which
are open to future testing and evolution.  Decision making criteria
such as preferences in predicted outcomes are included in this
knowledge base.  For example, increased profitability is preferable to
decreased profitability.  Also, increased market share is preferable
to decreased market share.  Conflicts between these preferences can be
negotiated.  In addition preferences can arise as a result of
conflict.  Negotiating conflict can bring the negotiating process
itself into question as part of the evaluative criteria of how to
proceed which can itself change the nature of the relationship among
the participants \cite{quality-of-life}.

Concurrent open systems are inherently indeterminate.  The
indeterminacy does not stem from invoking a
random element.  It is different from the usual nondeterministic
computation studied in automata theory in which coin flipping is
allowed as an elementary computational step.  In general, it is not
possible to know ahead of time that a concurrent system will make a
decision by a certain time.  Flipping a coin can be used to force a
decision to occur by making an arbitrary choice.  Often, however, as a
matter of principle Due Process refuses to invoke arbitrary random
measures such as coin flipping to make a decision.  For example, a jury
might not return a verdict and the judge might have to declare a
mistrial.
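A minimal sketch of this refusal, again in illustrative Python: a jury
of concurrent voters is given a deadline; if unanimity is not reached in
time, the arbiter declares a mistrial rather than flipping a coin.  The
function {\it deliberate\/} and its deadline parameter are hypothetical.

```python
import queue
import threading

def deliberate(votes, deadline=0.5):
    """Collect votes from concurrent jurors until a deadline.  If the
    jurors do not all agree in time, refuse to force a decision:
    declare a mistrial instead of choosing arbitrarily."""
    box = queue.Queue()
    for v in votes:
        threading.Thread(target=box.put, args=(v,)).start()
    collected = []
    try:
        for _ in votes:
            collected.append(box.get(timeout=deadline))
    except queue.Empty:
        return "mistrial"        # the jury never finished deliberating
    if len(set(collected)) == 1:
        return collected[0]      # a unanimous verdict was reached
    return "mistrial"            # disagreement; no arbitrary tie-break

assert deliberate(["guilty"] * 3) == "guilty"
assert deliberate(["guilty", "not guilty"]) == "mistrial"
```

The arbiter never invokes a random element: a divided or late jury
yields a mistrial, exactly as in the judicial example above.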

Due Process provides the foundation to overcome the following
limitations of traditional approaches:

\begin{itemize}

\item The soundness of Due Process is explicitly social,
spatiotemporal, and contextual.

\item Due Process makes use of discussion, debate, and negotiation to
deal with micromodels that contradict each other and with conflicts
between proposed actions.

\item Arrival order is explicit in the semantics of Due Process.

\end{itemize}


\section{Scope and Limits}

The main limitation of Organizational Semantics is common to other
approaches: so far only very small scale systems have been constructed
and it remains to be shown that the mechanisms can be scaled up to
empower large scale computer organizations that operate effectively
with human organizations.  Important theoretical work needs to be done
on the analysis of mechanisms of Due Process including authority,
responsibility, and cooperation.

Furthermore the following important question remains open: are the
functional cognitive equivalents of individual humans required as
components in order to have effective computer organizations?
If the functional equivalent of individual humans is required, then
the ultimate success of the organizational approach depends on the
development of effective computer individuals.  On the other hand, if
effective computer organizations can be constructed from communities of
more primitive elements such as actors, then progress on the
organizational approach will not be nearly so dependent on the
development of computer individuals.

\section{Conclusion}

Any organizational belief is subject to internal and external
challenges.  Organizations must efficiently take action and make
decisions in the face of contradictory beliefs and conflicting
possible actions.  How they do so is a fundamental consideration in
understanding how to construct effective organizational information
systems.  Logical deduction is an extremely powerful and useful tool
for exploring the structure, consequences, and limitations of
microtheories and models.  However, within organizations, judgments
play a more central role than logical reasoning, deduction, and
inference, which are useful only {\em within} microtheories.

The process of making organizational judgments depends on:

\begin {itemize}
\item The context in which the judgment is made.  The judgment can be
critiqued at another time and place, but the critique is in turn a
situated action.

\item Effective use of contradictory microtheories to transcend their
individual limits in context.

\item Dealing with conflicts among possible actions.

\item Effective use of the order in which information is received and
generated.

\item The fact that greater knowledge can mean greater deductive
indeterminacy.

\end{itemize}

\section{Acknowledgments}

I would like to acknowledge the help of Gul Agha, Jonathon Amsterdam,
Peter de Jong, Paul Harmon, Michael Ernst, Carl Manning, Richard
Waldinger, and Fanya Montalvo in improving the presentation.  I owe a
tremendous intellectual debt to my colleagues in the Message Passing
Semantics Group, the Tremont Research Institute, and the MIT
Artificial Intelligence Laboratory.  Ken Kahn, Ueda, Keith Clark, and
Takeuchi greatly helped me to understand (Flat) Concurrent Prolog,
(Flat) Parlog, and (Flat) Guarded Horn Clauses.

This paper describes research done at the Artificial Intelligence
Laboratory of the Massachusetts Institute of Technology.  Major
support for the research reported in this paper was provided by the
System Development Foundation.  Major support for other related work
in the Artificial Intelligence Laboratory is provided, in part, by the
Advanced Research Projects Agency of the Department of Defense under
Office of Naval Research contract N0014-80-C-0505.  I would like to
thank Carl York, Charles Smith, and Patrick Winston for their support
and encouragement.

%%\newpage

%% Notice: When you specify the Name of the Bibliography File
%%         DO NOT specify the ``.bib'' suffix because
%%         LaTeX appends the ``.bib'' onto this thing.

\bibliography{biblio}

%% So far so good, except you now have to run this file through 
%% LaTeX. Then, run the file through BibTeX. Then
%% run the file through LaTeX twice more. If everything
%% goes well, the alphabetically sorted REFERENCES will appear
%% at the end of the Document. Note ``REFERENCES'' is what's used
%% for Article documentstyles. ``Report'', etc., cause a
%% ``BIBLIOGRAPHY'' to be printed instead.



%% to protect the end of the document.
\end{document}